3D Semantic Segmentation

3D semantic segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions by assigning a class label to every point or face. The goal is to identify and label the objects and surfaces within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
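As a concrete illustration of per-point labeling, the sketch below shows a PointNet-style forward pass in plain NumPy: a shared per-point MLP, an order-invariant global feature obtained by max pooling, and a per-point classifier over the concatenated local and global features. All layer sizes, the random weights, and the function names are illustrative assumptions, not any listed paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def segment_points(points, num_classes=4):
    """points: (N, 3) array of xyz coordinates -> (N,) predicted class labels.

    A toy, untrained forward pass; weights are random for illustration.
    """
    n = points.shape[0]
    # Shared per-point MLP (3 -> 16): the same weights apply to every point.
    w1 = rng.standard_normal((3, 16))
    local_feat = relu(points @ w1)                       # (N, 16)
    # Symmetric aggregation: max pooling makes the global feature
    # invariant to the ordering of the points.
    global_feat = local_feat.max(axis=0, keepdims=True)  # (1, 16)
    # Give every point both its local feature and the scene context.
    combined = np.concatenate(
        [local_feat, np.repeat(global_feat, n, axis=0)], axis=1)  # (N, 32)
    # Per-point classifier head (32 -> num_classes).
    w2 = rng.standard_normal((32, num_classes))
    scores = combined @ w2                               # (N, num_classes)
    return scores.argmax(axis=1)                         # one label per point

cloud = rng.standard_normal((128, 3))  # a toy point cloud
labels = segment_points(cloud)
print(labels.shape)  # (128,)
```

The max-pooling step is the key design choice: because a point cloud is an unordered set, any aggregation over points must be permutation-invariant for the prediction to be well defined.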

Recent papers:

Benchmarking Deep Learning Models for Aerial LiDAR Point Cloud Semantic Segmentation under Real Acquisition Conditions: A Case Study in Navarre (Mar 23, 2026)

Uncertainty-aware Prototype Learning with Variational Inference for Few-shot Point Cloud Segmentation (Mar 20, 2026)

Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting (Mar 19, 2026)

SegVGGT: Joint 3D Reconstruction and Instance Segmentation from Multi-View Images (Mar 20, 2026)

DriveTok: 3D Driving Scene Tokenization for Unified Multi-View Reconstruction and Understanding (Mar 19, 2026)

OnlinePG: Online Open-Vocabulary Panoptic Mapping with 3D Gaussian Splatting (Mar 19, 2026)

Perceptio: Perception Enhanced Vision Language Models via Spatial Token Generation (Mar 19, 2026)

TCATSeg: A Tooth Center-Wise Attention Network for 3D Dental Model Semantic Segmentation (Mar 17, 2026)

UniSem: Generalizable Semantic 3D Reconstruction from Sparse Unposed Images (Mar 18, 2026)

SegFly: A 2D-3D-2D Paradigm for Aerial RGB-Thermal Semantic Segmentation at Scale (Mar 18, 2026)